
Neural Information Processing Systems

MIM to enhance the adversarial robustness of downstream models. Our paper focuses specifically on the adversarial robustness of ViTs, and we show that our method provides an effective defense against severe adversarial attacks. We propose two hypotheses to explain why our method is effective. Figure 3 (a) compares the results when the noise is known versus unknown to the attacker: even when the attacker can access the noise, our model's robust accuracy does not degrade much. The results indicate that both proposed hypotheses hold.



A New Defense Against Adversarial Images: Turning a Weakness into a Strength

Shengyuan Hu, Tao Yu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger

Neural Information Processing Systems

While many techniques for detecting these attacks have been proposed, they are easily bypassed when the adversary has full knowledge of the detection mechanism and adapts the attack strategy accordingly. In this paper, we adopt a novel perspective and regard the omnipresence of adversarial perturbations as a strength rather than a weakness.
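To make the notion of an adversarial perturbation concrete, here is a minimal sketch of the classic FGSM attack (a standard white-box attack, not the defense described in either abstract) against a toy logistic-regression "image" classifier. All names (`w`, `b`, `x`, `fgsm_perturb`, the 16-pixel input) are illustrative assumptions, not drawn from the papers above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Shift x by eps * sign(grad of the loss w.r.t. x), then clip to [0, 1].

    For a logistic model with cross-entropy loss, the input gradient
    has the closed form (p - y) * w, so no autodiff is needed here.
    """
    p = sigmoid(w @ x + b)          # predicted probability of class 1
    grad_x = (p - y) * w            # d(cross-entropy)/dx for this model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=16)             # toy classifier weights
b = 0.0
x = rng.uniform(size=16)            # a tiny 16-"pixel" input in [0, 1]
y = 1.0                             # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
clean_loss = -np.log(sigmoid(w @ x + b))
adv_loss = -np.log(sigmoid(w @ x_adv + b))
print(adv_loss > clean_loss)        # the perturbation increases the loss
```

The key point for the abstract's argument is that such perturbations exist almost everywhere for standard models, which is exactly the "omnipresence" the paper proposes to exploit for detection.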